Towards Securing Machine Learning Models Against Membership Inference Attacks

Authors

Abstract

From fraud detection to speech recognition, including price prediction, Machine Learning (ML) applications are manifold and can significantly improve different areas. Nevertheless, machine learning models are vulnerable and exposed to security and privacy attacks. Hence, these issues should be addressed while using ML in order to preserve the privacy of the data used. There is a need to secure models, especially during the training phase, and their datasets, so as to minimise information leakage. In this paper, we present an overview of threats and vulnerabilities, and highlight the current progress of research works proposing defence techniques against these attacks. The relevant background for attacks occurring in both the training and testing/inferring phases is introduced before presenting a detailed study of Membership Inference Attacks (MIA) and their related countermeasures. We introduce a countermeasure against membership inference on Convolutional Neural Networks (CNN) based on dropout and L2 regularization. Through experimental analysis, we demonstrate that this technique can mitigate the risks of MIA while ensuring acceptable accuracy of the model. Indeed, using a CNN model on the two datasets CIFAR-10 and CIFAR-100, we empirically verify the ability of our strategy to decrease the impact of the attack and compare the results with five classifiers. Moreover, our solution achieves a trade-off between the model's performance and the mitigation of the attack.
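
As a rough illustration of the countermeasure the abstract describes, the sketch below builds a small Keras CNN for CIFAR-10 that combines dropout with L2 (weight-decay) regularization. Both regularizers reduce overfitting, and therefore the gap between the model's confidence on training and non-training samples that membership inference exploits; this is also the source of the accuracy/mitigation trade-off the abstract mentions. The architecture, dropout rates, and L2 coefficient are illustrative assumptions, not the configuration reported in the paper.

import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Assumed L2 penalty coefficient; the paper's exact value may differ.
l2 = regularizers.l2(1e-4)

model = tf.keras.Sequential([
    layers.Input(shape=(32, 32, 3)),               # CIFAR-10 images
    layers.Conv2D(32, 3, padding="same", activation="relu",
                  kernel_regularizer=l2),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),                          # dropout after each conv block
    layers.Conv2D(64, 3, padding="same", activation="relu",
                  kernel_regularizer=l2),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation="relu", kernel_regularizer=l2),
    layers.Dropout(0.5),                           # heavier dropout before the output
    layers.Dense(10, activation="softmax"),        # 10 classes (100 for CIFAR-100)
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A simple confidence-threshold membership test (a common MIA baseline,
# not necessarily the attack evaluated in the paper): predict "member"
# when the model's top softmax confidence exceeds a threshold.
def is_member(model, x, threshold=0.9):
    return model.predict(x[None], verbose=0).max() > threshold

The less the regularized model memorises its training set, the less its confidence distribution separates members from non-members, which is what degrades such an attacker's advantage.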

Similar Articles

Securing Against Insider Attacks

We are all creatures of habit; the way we think and the views we take are conditioned by our education, society as a whole, and, at a much deeper level, our cultural memories or instinct. It is sometimes surprising how much the past can unconsciously affect today’s thinking. George Santayana famously observed, “Those who cannot remember the past are condemned to repeat it.” But when it comes to ...

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs. So far it was unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications because most methods used to generate such perturbations rely either on detailed model information (gradient-based attacks) or on confidence scores such as class proba...

Securing dynamic group membership information over multicast: attacks and immunization

In secure multicast communications, key management is employed to prevent unauthorized access to the multicast content. Key management, however, can disclose the information about the dynamics of the group membership to inside attackers, which is a potential threat to many multicast applications. In this paper, we investigated several attack strategies for stealing group dynamic information and...

Towards Securing Medical Documents from Insider Attacks

Medical organizations have sensitive health-related documents. Unauthorized access attempts to these should not only be prevented but also detected, in order to ensure correct treatment of the patients and to capture the malicious intent of users. Such organizations normally rely on the principle of least privilege together with the deployment of some commercially available software to cope with this...

Journal

Journal title: Computers, Materials & Continua

Year: 2022

ISSN: 1546-2218, 1546-2226

DOI: https://doi.org/10.32604/cmc.2022.019709